Support GitHub Copilot CLI and improve CLI accounting #141

mike1858 merged 7 commits into Piebald-AI:main from
Conversation
Agent-Logs-Url: https://github.com/zhangzqs/splitrail/sessions/4ee476d9-1122-4625-b054-0d5921bb214e
Co-authored-by: zhangzqs <34616640+zhangzqs@users.noreply.github.com>
… precise token tracking

Separate Copilot CLI parsing from the VS Code analyzer into its own module; add exact output token counts, context compaction handling, and model-based cost calculation.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
📝 Walkthrough

Implements a new GitHub Copilot CLI analyzer that parses JSONL session logs, integrates it into the analyzer registry and UI strings, expands the model catalog/aliases, and refactors message aggregation to count AI messages by assistant role rather than by model presence.

Changes
Sequence Diagram(s)

sequenceDiagram
participant FS as File System (~/ .copilot)
participant Discovery as Data Discovery
participant Parser as JSONL Parser
participant Reconstructor as Event Reconstructor
participant Calculator as Token & Cost Calculator
participant Output as ConversationEmitter
FS->>Discovery: Locate session-state / history-session-state
Discovery->>Parser: Open matched events.jsonl
Parser->>Reconstructor: Stream parsed JSON events
Reconstructor->>Reconstructor: Correlate user.message ↔ assistant/tool events
Reconstructor->>Calculator: Collect sessionId, cwd/gitRoot, message boundaries
Calculator->>Calculator: Estimate tokens (input/output/cache), accumulate tool stats
Calculator->>Calculator: Apply shutdown metrics or static overhead for cost
Calculator->>Output: Emit ConversationMessage items with hashes and stats
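The aggregation refactor described in the walkthrough (counting AI messages by assistant role rather than by model presence) can be sketched as follows. The `Message` type here is a hypothetical stand-in, not the project's real struct:

```rust
// Minimal sketch: count AI messages by role == "assistant" instead of by
// the presence of a model field, which tool events may also carry.
#[derive(Debug)]
struct Message {
    role: String,          // "user" | "assistant" | "tool"
    model: Option<String>, // may be set on non-assistant events too
}

fn count_ai_messages(messages: &[Message]) -> usize {
    messages.iter().filter(|m| m.role == "assistant").count()
}

fn main() {
    let msgs = vec![
        Message { role: "user".into(), model: None },
        Message { role: "assistant".into(), model: Some("gpt-4o".into()) },
        Message { role: "tool".into(), model: Some("gpt-4o".into()) },
    ];
    // Counting by model presence would have returned 2 here.
    assert_eq!(count_ai_messages(&msgs), 1);
}
```

The role check avoids over-counting when tool-call events echo the model name of the turn they belong to.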
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/analyzers/copilot_cli.rs (2)
541-624: Consider grouping parameters into a context struct. The `flush_copilot_cli_turn` function has 9 parameters. While the `#[allow(clippy::too_many_arguments)]` annotation acknowledges this, grouping related parameters (like `conversation_hash`, `project_hash`, `session_name`) into a context struct would improve readability and make future modifications easier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/analyzers/copilot_cli.rs` around lines 541 - 624, The function flush_copilot_cli_turn currently takes many related params—group conversation_hash, project_hash, session_name (and optionally user_index/assistant_index and live_context) into a small context struct to reduce argument count and improve readability; create a CopilotCliFlushContext (or similar) containing conversation_hash: String (or &str reference type matching callers), project_hash: String, session_name: Option<String>, and if helpful mutable indices user_index: usize and assistant_index: usize (or keep indices separate if you prefer), update flush_copilot_cli_turn signature to accept &mut CopilotCliFlushContext (and keep entries, current_turn, live_context, pending_user as before), then update all call sites to build/pass that context struct and adjust usage inside flush_copilot_cli_turn to reference context.conversation_hash, context.project_hash, context.session_name, and context.assistant_index (or &mut assistant_index) accordingly.
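The suggested refactor can be sketched like this; the struct shape and field types are assumptions inferred from the review comment, not copied from `copilot_cli.rs`:

```rust
// Hypothetical context struct bundling the related parameters of
// flush_copilot_cli_turn; remaining params (entries, current_turn, …) stay as-is.
struct CopilotCliFlushContext {
    conversation_hash: String,
    project_hash: String,
    session_name: Option<String>,
    assistant_index: usize,
}

fn flush_copilot_cli_turn(ctx: &mut CopilotCliFlushContext) {
    // Fields are reached through the context instead of nine loose arguments.
    let _ = (&ctx.conversation_hash, &ctx.project_hash, &ctx.session_name);
    ctx.assistant_index += 1;
}

fn main() {
    let mut ctx = CopilotCliFlushContext {
        conversation_hash: "c0ffee".into(),
        project_hash: "deadbeef".into(),
        session_name: None,
        assistant_index: 0,
    };
    flush_copilot_cli_turn(&mut ctx);
    assert_eq!(ctx.assistant_index, 1);
}
```

Bundling also lets future fields be added without touching every call site, which is the main payoff over silencing the Clippy lint.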
751-755: Using `Utc::now()` for missing timestamps may affect reproducibility. When an event lacks a timestamp, the current time is used as a fallback. This means parsing the same file at different times could produce different `date` values for those messages. Consider logging a warning or using a sentinel value if reproducibility is a concern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/analyzers/copilot_cli.rs` around lines 751 - 755, The code sets pending_user.date via event_timestamp.unwrap_or_else(Utc::now), which makes parsing non-deterministic; change CopilotCliPendingUser.date to use a reproducible sentinel or optional timestamp instead of calling Utc::now(), and emit a warning when the event timestamp is missing; specifically, update the creation of pending_user (the pending_user assignment and CopilotCliPendingUser struct) to accept either Option<DateTime<Utc>> or a fixed sentinel (e.g., epoch) and add a log call when event_timestamp.is_none() so missing timestamps are recorded for debugging and reproducibility.
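A deterministic fallback could look like the sketch below. Epoch seconds (`i64`) stand in for chrono's `DateTime<Utc>` here as a simplification, and the names mirror the review prompt rather than the real source:

```rust
// Keep the timestamp optional instead of substituting Utc::now(), and warn
// when an event has no timestamp, so repeated parses of the same file agree.
struct CopilotCliPendingUser {
    date: Option<i64>, // None means "timestamp missing in the event"
}

fn pending_user_from_event(event_timestamp: Option<i64>) -> CopilotCliPendingUser {
    if event_timestamp.is_none() {
        eprintln!("warning: event has no timestamp; leaving date unset");
    }
    CopilotCliPendingUser { date: event_timestamp }
}

fn main() {
    // Parsing the same file twice now yields the same (absent) date.
    assert_eq!(pending_user_from_event(None).date, None);
    assert_eq!(pending_user_from_event(Some(1_700_000_000)).date, Some(1_700_000_000));
}
```

Downstream code then decides explicitly how to render or sort messages with no date, instead of inheriting whatever the wall clock said at parse time.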
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/tui.rs`:
- Line 1177: Update the no-data screen label that currently reads "GitHub
Copilot" to "GitHub Copilot (VS Code)" so it matches the analyzer split and
README wording; locate the string literal in src/tui.rs (the quoted message
shown) and replace that occurrence, and also search for other user-facing
occurrences of the plain "GitHub Copilot" label in the UI/strings to make the
naming consistent with "GitHub Copilot (VS Code)".
📒 Files selected for processing (17)

README.md, src/analyzers/copilot.rs, src/analyzers/copilot_cli.rs, src/analyzers/mod.rs, src/analyzers/tests/copilot.rs, src/analyzers/tests/copilot_cli.rs, src/analyzers/tests/mod.rs, src/contribution_cache/single_session.rs, src/main.rs, src/mcp/server.rs, src/models.rs, src/tui.rs, src/tui/logic.rs, src/types.rs, src/utils.rs, src/utils/tests.rs, vscode-splitrail/README.md
…upport-copilot-cli-statistics
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/models.rs (1)
972-988: ⚠️ Potential issue | 🔴 Critical: Correct M2.5 output pricing from $1.10 to $1.20 per 1M tokens. M2.1 pricing of $0.30 / $1.20 is verified against official MiniMax API documentation and correctly marked `is_estimated: false`. However, M2.5 output pricing in the code is incorrect: official documentation shows M2.5 has identical pricing to M2.1 at $0.30 / $1.20 per 1M tokens, not $1.10 as currently set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/models.rs` around lines 972 - 988, Update the MiniMax M2.5 entry so its pricing matches M2.1: in the mapping key "minimax-m2.5" (ModelInfo) change the PricingStructure::Flat output_per_1m from 1.10 to 1.20 so the pair becomes input_per_1m: 0.30 and output_per_1m: 1.20, leaving caching (CachingSupport::None) and is_estimated: false unchanged.
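The effect of the pricing fix can be sketched with a minimal flat-pricing cost calculation. The struct shape is assumed from the review comment (`PricingStructure::Flat` with per-1M rates), not copied from `src/models.rs`:

```rust
// Hypothetical flat pricing record and cost formula: dollars per 1M tokens.
struct FlatPricing {
    input_per_1m: f64,
    output_per_1m: f64,
}

fn cost(p: &FlatPricing, input_tokens: u64, output_tokens: u64) -> f64 {
    (input_tokens as f64 / 1_000_000.0) * p.input_per_1m
        + (output_tokens as f64 / 1_000_000.0) * p.output_per_1m
}

fn main() {
    // Corrected M2.5 pricing: $0.30 in / $1.20 out per 1M tokens (same as M2.1).
    let m2_5 = FlatPricing { input_per_1m: 0.30, output_per_1m: 1.20 };
    let c = cost(&m2_5, 1_000_000, 1_000_000);
    assert!((c - 1.50).abs() < 1e-9);
}
```

With the old $1.10 output rate, the same 1M/1M usage would have been billed at $1.40 instead of $1.50, so the error understates cost by about 7%.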
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/models.rs`:
- Around line 1000-1008: The current addition of a literal "auto" ModelInfo
entry (PricingStructure::Flat(...), CachingSupport::None, is_estimated: true)
suppresses the existing warn_once("Unknown model: ...") path and can silently
make a real billed model named "auto" effectively free; remove or rework this
entry so real provider-reported "auto" still hits the unknown-model warning.
Either revert the "auto" map entry and handle routing selectors earlier in the
resolution code, or introduce a distinct flag/variant on ModelInfo (e.g.,
is_routing_selector or KnownButZero) and update the unknown-model warn logic to
only suppress warnings for entries marked as routing selectors while still
emitting a single debug/info log for known-zero models; reference the "auto"
mapping, ModelInfo, PricingStructure::Flat, CachingSupport::None, and
is_estimated when making the change.
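The flag/variant approach suggested above can be sketched as follows. `ModelInfo` is reduced to the one field that matters here, and the catalog entries are illustrative:

```rust
use std::collections::HashMap;

// Hypothetical slimmed-down ModelInfo carrying the suggested flag.
struct ModelInfo {
    is_routing_selector: bool, // true for routing aliases like "auto"
}

// Returns true when the unknown-model warning should fire.
fn should_warn(catalog: &HashMap<&str, ModelInfo>, model: &str) -> bool {
    match catalog.get(model) {
        None => true, // genuinely unknown: keep the warn_once path
        Some(info) if info.is_routing_selector => {
            eprintln!("debug: '{model}' is a routing selector, not a billed model");
            false
        }
        Some(_) => false, // known billed model: no warning
    }
}

fn main() {
    let mut catalog = HashMap::new();
    catalog.insert("auto", ModelInfo { is_routing_selector: true });
    catalog.insert("minimax-m2.1", ModelInfo { is_routing_selector: false });
    assert!(!should_warn(&catalog, "auto"));         // suppressed, logged at debug
    assert!(!should_warn(&catalog, "minimax-m2.1")); // known model
    assert!(should_warn(&catalog, "mystery-model")); // unknown-model warning kept
}
```

This keeps the suppression scoped to entries explicitly marked as selectors, so a future provider-billed model that happens to be named "auto" would still be caught when the flag is reviewed.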

Summary

What changed

- GitHub Copilot CLI analyzer and `Application::CopilotCli`
- `session.shutdown.data.modelMetrics` usage to completed sessions
- `Reason Tks` from `reasoningText`, without changing the existing output/cost semantics

Validation

- `cargo fmt --all --quiet`
- `cargo build --quiet`
- `cargo test --quiet`
- `cargo clippy --quiet -- -D warnings`
- `cargo doc --quiet`

Notes
Summary by CodeRabbit
New Features
Bug Fixes
Documentation